-
Systems driven far from equilibrium often retain structural memories of their processing history. This memory has, in some cases, been shown to dramatically alter the material response. For example, work hardening in crystalline metals can alter the hardness, yield strength, and tensile strength to prevent catastrophic failure. Whether memory of processing history can be similarly exploited in flowing systems, where significantly larger changes in structure should be possible, remains poorly understood. Here, we demonstrate a promising route to embedding such useful memories. We build on work showing that exposing a sheared dense suspension to acoustic perturbations of different power allows for dramatically tuning the sheared suspension viscosity and underlying structure. We find that, for sufficiently dense suspensions, upon removing the acoustic perturbations, the suspension shear jams with shear stress contributions from the maximum compressive and maximum extensive axes that reflect or “remember” the acoustic training. Because the contributions from these two orthogonal axes to the total shear stress are antagonistic, it is possible to tune the resulting suspension response in surprising ways. For example, we show that differently trained sheared suspensions exhibit (1) different susceptibility to the same acoustic perturbation, (2) orders of magnitude changes in their instantaneous viscosities upon shear reversal, and (3) even a shear stress that increases in magnitude upon shear cessation. We work through these examples to explain the underlying mechanisms governing each behavior. Then, to illustrate the power of this approach for controlling suspension properties, we demonstrate that flowing states well below the shear jamming threshold can be shear jammed via acoustic training. Collectively, our work paves the way for using acoustically induced memory in dense suspensions to generate rapidly and widely tunable materials. 
Published by the American Physical Society, 2024.
-
We develop information-geometric techniques to analyze the trajectories of the predictions of deep networks during training. By examining the underlying high-dimensional probabilistic models, we reveal that the training process explores an effectively low-dimensional manifold. Networks with a wide range of architectures and sizes, trained using different optimization methods, regularization techniques, data augmentation techniques, and weight initializations, lie on the same manifold in the prediction space. We study the details of this manifold to find that networks with different architectures follow distinguishable trajectories, but other factors have a minimal influence; larger networks train along a similar manifold as smaller networks, just faster; and networks initialized at very different parts of the prediction space converge to the solution along a similar manifold.
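A minimal numerical sketch of the kind of analysis described above, using synthetic prediction trajectories and plain PCA (the paper's actual analysis uses information-geometric embeddings of real trained networks; the data, sizes, and square-root coordinates below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy "training trajectories": predictions of M models on N samples with C
# classes, recorded at T checkpoints.  Each trajectory interpolates from a
# model-specific near-uniform start toward the same one-hot targets.
N, C, T, M = 50, 10, 20, 5
targets = np.eye(C)[rng.integers(0, C, N)]         # shared ground-truth labels
points = []
for m in range(M):
    start = softmax(rng.normal(0.0, 0.1, (N, C)))  # different initializations
    for t in np.linspace(0.0, 1.0, T):
        p = (1 - t) * start + t * targets
        points.append(np.sqrt(p).ravel())          # sqrt coords: Hellinger-like geometry

X = np.array(points)                               # (M*T, N*C) points in prediction space
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)   # PCA via SVD
var = s**2 / (s**2).sum()
print("variance captured by top 3 directions:", var[:3].sum())
```

Even though the ambient prediction space has N*C = 500 dimensions, the top few principal directions capture most of the variance, mirroring the paper's finding that trajectories live on an effectively low-dimensional manifold.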
-
Nearly all dense suspensions undergo dramatic and abrupt thickening transitions in their flow behavior when sheared at high stresses. Such transitions occur when the dominant interactions between the suspended particles shift from hydrodynamic to frictional. Here, we interpret abrupt shear thickening as a precursor to a rigidity transition and give a complete theory of the viscosity in terms of a universal crossover scaling function from the frictionless jamming point to a rigidity transition associated with friction, anisotropy, and shear. Strikingly, we find experimentally that for two different systems—cornstarch in glycerol and silica spheres in glycerol—the viscosity can be collapsed onto a single universal curve over a wide range of stresses and volume fractions. The collapse reveals two separate scaling regimes due to a crossover between frictionless isotropic jamming and frictional shear jamming, with different critical exponents. The material-specific behavior due to the microscale particle interactions is incorporated into a scaling variable that governs the proximity to shear jamming and depends on both stress and volume fraction. This reformulation opens the door to importing the vast theoretical machinery developed to understand equilibrium critical phenomena to elucidate fundamental physical aspects of the shear thickening transition.
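The crossover-scaling idea can be illustrated with a toy Wyart–Cates-style model, in which a stress-dependent fraction of frictional contacts interpolates the jamming point between its frictionless and frictional values (the parameter values and the specific functional forms below are illustrative assumptions, not the paper's fitted scaling function or exponents):

```python
import numpy as np

# Toy Wyart-Cates-style crossover: the fraction of frictional particle contacts
# f(sigma) grows with shear stress, moving the jamming volume fraction from
# phi_0 (frictionless, low stress) down to phi_m (frictional, high stress).
phi_0, phi_m, sigma_star = 0.64, 0.56, 1.0

def f(sigma):                 # fraction of frictional contacts
    return np.exp(-sigma_star / sigma)

def phi_J(sigma):             # stress-dependent jamming volume fraction
    return phi_m * f(sigma) + phi_0 * (1.0 - f(sigma))

def eta(sigma, phi):          # relative viscosity, diverging as phi -> phi_J(sigma)
    return (1.0 - phi / phi_J(sigma)) ** -2

# Collapse: eta depends on stress and volume fraction only through the scaled
# distance to the jamming line, so rescaled curves fall on one master curve.
sigmas = np.array([0.5, 1.0, 2.0, 5.0])
results = []
for phi in [0.45, 0.50, 0.53]:
    x = (phi_J(sigmas) - phi) / phi_J(sigmas)     # scaling variable
    results.append(eta(sigmas, phi) * x**2)       # rescaled viscosity
results = np.array(results)
print(results)    # every entry is 1: a perfect collapse in this toy model
```

In this caricature the collapse is exact by construction; the point of the paper is that real data for cornstarch and silica suspensions collapse in the same spirit, with nontrivial exponents and two distinct scaling regimes.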
-
We propose a design paradigm for multistate machines where transitions from one state to another are organized by bifurcations of multiple equilibria of the energy landscape describing the collective interactions of the machine components. This design paradigm is attractive since, near bifurcations, small variations in a few control parameters can result in large changes to the system’s state, providing an emergent lever mechanism. Further, the topological configuration of transitions between states near such bifurcations ensures robust operation, making the machine less sensitive to fabrication errors and noise. To design such machines, we develop and implement a new, efficient algorithm that searches for interactions between the machine components that give rise to energy landscapes with these bifurcation structures. We demonstrate a proof of concept for this approach by designing magnetoelastic machines whose motions are primarily guided by their magnetic energy landscapes, and we show that by operating near bifurcations we can achieve multiple transition pathways between states. This demonstration illustrates the power of the approach, which could be especially useful for soft robotics and at the microscale, where typical macroscale designs are difficult to implement.
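A one-dimensional caricature of the design principle: an energy landscape whose control parameter drives a pitchfork bifurcation, changing the number of stable states the "machine" can occupy (a hand-built toy, not the paper's magnetoelastic landscapes or search algorithm):

```python
import math

# One-dimensional caricature of a bifurcation-organized multistate machine:
# energy E(x; c) = x**4/4 - c*x**2/2.  The control parameter c drives a
# pitchfork bifurcation at c = 0, where one stable state splits into two.

def stable_states(c):
    # Equilibria solve dE/dx = x**3 - c*x = 0, i.e. x = 0 and, for c > 0,
    # x = +/- sqrt(c).  Stability requires d2E/dx2 = 3*x**2 - c > 0.
    equilibria = [0.0] + ([math.sqrt(c), -math.sqrt(c)] if c > 0 else [])
    return sorted(x for x in equilibria if 3 * x**2 - c > 0)

print(stable_states(-1.0))   # one stable state:  [0.0]
print(stable_states(1.0))    # two stable states: [-1.0, 1.0]
```

Sweeping c through zero is the lever mechanism in miniature: a small change in one control parameter reorganizes the set of available states, and which well the system lands in depends on the path taken near the bifurcation.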
-
We develop information geometric techniques to understand the representations learned by deep networks when they are trained on different tasks using supervised, meta-, semi-supervised, and contrastive learning. We shed light on the following phenomena that relate to the structure of the space of tasks: (1) the manifold of probabilistic models trained on different tasks using different representation learning methods is effectively low-dimensional; (2) supervised learning on one task results in a surprising amount of progress even on seemingly dissimilar tasks; progress on other tasks is larger if the training task has diverse classes; (3) the structure of the space of tasks indicated by our analysis is consistent with parts of the Wordnet phylogenetic tree; (4) episodic meta-learning algorithms and supervised learning traverse different trajectories during training, but they fit similar models eventually; (5) contrastive and semi-supervised learning methods traverse trajectories similar to those of supervised learning. We use classification tasks constructed from the CIFAR-10 and Imagenet datasets to study these phenomena. Code is available at https://github.com/grasp-lyrl/picture_of_space_of_tasks.
-
Complex models in physics, biology, economics, and engineering are often sloppy, meaning that the model parameters are not well determined by the model predictions for collective behavior. Many parameter combinations can vary over decades without significant changes in the predictions. This review uses information geometry to explore sloppiness and its deep relation to emergent theories. We introduce the model manifold of predictions, whose coordinates are the model parameters. Its hyperribbon structure explains why only a few parameter combinations matter for the behavior. We review recent rigorous results that connect the hierarchy of hyperribbon widths to approximation theory, and to the smoothness of model predictions under changes of the control variables. We discuss recent geodesic methods to find simpler models on nearby boundaries of the model manifold—emergent theories with fewer parameters that explain the behavior equally well. We discuss a Bayesian prior which optimizes the mutual information between model parameters and experimental data, naturally favoring points on the emergent boundary theories and thus simpler models. We introduce a ‘projected maximum likelihood’ prior that efficiently approximates this optimal prior, and contrast both to the poor behavior of the traditional Jeffreys prior. We discuss the way the renormalization group coarse-graining in statistical mechanics introduces a flow of the model manifold, and connect stiff and sloppy directions along the model manifold with relevant and irrelevant eigendirections of the renormalization group. Finally, we discuss recently developed ‘intensive’ embedding methods, allowing one to visualize the predictions of arbitrary probabilistic models as low-dimensional projections of an isometric embedding, and illustrate our method by generating the model manifold of the Ising model.
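Sloppiness is easy to see in the classic two-exponential fitting model: the eigenvalues of the (Gauss–Newton approximation to the) Fisher information span decades, separating a stiff from a sloppy parameter direction (the model, time grid, and parameter values below are illustrative assumptions):

```python
import numpy as np

# Classic sloppy model: y(t; theta) = exp(-theta1*t) + exp(-theta2*t), fit by
# least squares.  Nearly degenerate decay rates make the two parameter
# directions wildly unequal in how strongly they affect the predictions.
t = np.linspace(0.0, 5.0, 40)
theta1, theta2 = 1.0, 1.2

# Jacobian of the predictions with respect to (theta1, theta2), analytically:
J = np.stack([-t * np.exp(-theta1 * t), -t * np.exp(-theta2 * t)], axis=1)
fim = J.T @ J                                    # Gauss-Newton Fisher information
eigs = np.sort(np.linalg.eigvalsh(fim))[::-1]    # stiff first, sloppy last
print("eigenvalues:", eigs)
print("stiff/sloppy ratio:", eigs[0] / eigs[1])
```

The large eigenvalue ratio is the sloppiness: moving parameters along the sloppy eigendirection (roughly theta1 up, theta2 down) barely changes the fit, which is why the model manifold develops the thin hyperribbon structure described above.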